BMI Prediction from Handwritten English Characters Using a Convolutional Neural Network

Diba, N. T., Akter, N., Chowdhury, S. A. H., Giti, J. E.

arXiv.org Artificial Intelligence

A person's Body Mass Index (BMI) is the most widely used parameter for assessing their health. Because BMI correlates with body fat, it is a crucial predictor of diseases that may arise at higher body-fat levels; it can also be used to assess the nutritional status of an individual or a community. Although several studies have used deep learning models to estimate BMI from face photos and other data, no previous research established a clear connection between deep learning techniques for handwriting analysis and BMI prediction. This article addresses that gap with a deep learning approach to estimating BMI from handwritten characters by developing a convolutional neural network (CNN). A dataset of lowercase English handwriting samples from 48 people was captured for the BMI prediction task. The proposed CNN-based approach reports a commendable accuracy of 99.92%. A performance comparison with other popular CNN architectures shows that AlexNet and InceptionV3 achieve the second- and third-best performance, with accuracies of 99.69% and 99.53%, respectively.


Bengali Handwritten Digit Recognition using CNN with Explainable AI

Shawon, Md Tanvir Rouf, Tanvir, Raihan, Alam, Md. Golam Rabiul

arXiv.org Artificial Intelligence

Handwritten character recognition is an active research topic. If a handwritten page can be converted into a text-searchable document using Optical Character Recognition (OCR), its content can be understood without reading the original document by hand. OCR is well established for English, but high-quality OCR applications for Bengali are hard to find, so combining machine learning and deep learning with OCR could be a substantial contribution to this field. Various researchers have proposed strategies for recognizing Bengali handwritten characters, applying many ML algorithms and deep neural networks, but explanations of their models are not available. In our work, we used various machine learning algorithms and a CNN to recognize handwritten Bengali digits. Several ML models achieved acceptable accuracy, and the CNN gave excellent testing accuracy. We applied Grad-CAM as an XAI method to our CNN model, which gave us insight into the model and helped us identify the regions of interest it uses when recognizing a digit in an image.


Machine Learning For Grannies

#artificialintelligence

She just finished over-feeding you with her delicious food, and now she wants you to fix her Skype account. "It's not working," she complains. It turns out she somehow managed to pick up a trojan horse virus. You restore her computer, create a new Skype account, and everything is fine. "Tell me what you're doing in school!" she asks.


High school students helped an AI learn to read old handwritten texts

#artificialintelligence

In Italy, 120 high school students helped solve a centuries-old problem: how to give researchers access to the Vatican Secret Archives, a massive collection of documents detailing the Vatican's activities as far back as the eighth century. That should look pretty great on their college applications. The shelves of the Vatican Secret Archives are about 85 kilometers (53 miles) long and house 35,000 volumes of catalogues. But the documents that researchers have scanned and uploaded take up less than an inch. That's because the Vatican seems not to have wanted to share the information.


Fully Convolutional Network Based Skeletonization for Handwritten Chinese Characters

Wang, Tie-Qiang (Institute of Automation, Chinese Academy of Sciences) | Liu, Cheng-Lin (Institute of Automation, Chinese Academy of Sciences)

AAAI Conferences

Structural analysis of handwritten characters relies heavily on robust skeletonization of strokes, which previous thinning methods have not solved well. This paper presents an effective fully convolutional network (FCN) to extract stroke skeletons for handwritten Chinese characters. We combine the holistically-nested architecture with regressive dense upsampling convolution (rDUC) and the recently proposed hybrid dilated convolution (HDC) to generate pixel-level predictions for skeleton extraction. We evaluate our method on character images synthesized from the online handwritten dataset CASIA-OLHWDB and achieve higher skeleton-pixel detection accuracy than traditional thinning algorithms. We also conduct skeleton-based character recognition experiments using convolutional neural network (CNN) classifiers on offline/online handwritten datasets, and obtain accuracies comparable to recognition on the original character images. This implies that the skeletonization loses little shape information.
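The hybrid dilated convolution mentioned in the abstract is built from dilated convolutions, whose kernel taps are spaced `dilation` pixels apart so the receptive field grows without adding parameters. The paper's HDC module is more elaborate (it cycles dilation rates to avoid gridding artifacts), but the core operation can be sketched in plain NumPy. This is an illustrative single-channel, valid-mode version using the deep-learning cross-correlation convention, not the authors' implementation:

```python
import numpy as np

def dilated_conv2d(image, kernel, dilation=1):
    """Valid-mode 2-D cross-correlation with a dilated kernel (stride 1).

    A k x k kernel with dilation d covers an effective window of
    (k-1)*d + 1 pixels per side while still having only k*k weights.
    """
    kh, kw = kernel.shape
    eh = (kh - 1) * dilation + 1  # effective kernel height
    ew = (kw - 1) * dilation + 1  # effective kernel width
    H, W = image.shape
    out = np.zeros((H - eh + 1, W - ew + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # sample the input at every `dilation`-th pixel under the window
            patch = image[i:i + eh:dilation, j:j + ew:dilation]
            out[i, j] = np.sum(patch * kernel)
    return out
```

With `dilation=1` this reduces to an ordinary convolution; with `dilation=2` a 2x2 kernel reads a 3x3 neighborhood, which is why stacking such layers enlarges the context each skeleton-pixel prediction can see at no extra parameter cost.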


Computer Learns to Write Its ABCs

AITopics Original Links

A new computer model can now mimic the human ability to learn new concepts from a single example instead of the hundreds or thousands of examples it takes other machine learning techniques, researchers say. The new model learned how to write invented symbols from the animated show Futurama as well as dozens of alphabets from across the world. It also showed it could invent symbols of its own in the style of a given language. The researchers suggest their model could also learn other kinds of concepts, such as speech and gestures. Although scientists have made great advances in machine learning in recent years, people remain much better at learning new concepts than machines.


A.I. Software Learns a Simple Task Like a Human

AITopics Original Links

Scientists have invented a machine that imitates the way the human brain learns new information, a step forward for artificial intelligence, researchers reported. The system described in the journal Science is a computer model "that captures humans' unique ability to learn new concepts from a single example," the study said. "Though the model is only capable of learning handwritten characters from alphabets, the approach underlying it could be broadened to have applications for other symbol-based systems, like gestures, dance moves, and the words of spoken and signed languages." Joshua Tenenbaum, a professor at the Massachusetts Institute of Technology (MIT), said he wanted to build a machine that could mimic the mental abilities of young children.


This AI Algorithm Learns Simple Tasks as Fast as We Do

#artificialintelligence

Taking inspiration from the way humans seem to learn, scientists have created AI software capable of picking up new knowledge in a far more efficient and sophisticated way. The new AI program can recognize a handwritten character about as accurately as a human can, after seeing just a single example. The best existing machine-learning algorithms, which employ a technique called deep learning, need to see many thousands of examples of a handwritten character in order to learn the difference between an A and a Z. The software was developed by Brenden Lake, a researcher at New York University, together with Ruslan Salakhutdinov, an assistant professor of computer science at the University of Toronto, and Joshua Tenenbaum, a professor in the Department of Brain and Cognitive Sciences at MIT. Details of the program, and the ideas behind it, are published today in the journal Science.


Now Artificial intelligence machines can learn as human – Cape Coral Science Centre - Albany Daily Star Gazette

#artificialintelligence

One day, robots may take over the world, leaving humanity to wonder when artificial intelligence (AI) became too powerful. That horrible scenario is unlikely in the near term because humans have a major advantage over machines: the ability to learn. But the gap between humans and robots may slowly narrow in the future, as AI is now becoming capable of learning. Today's most sophisticated AI systems rely on learning from tens to hundreds of examples, whereas humans can learn from a few or even one. Taking inspiration from the way humans seem to learn, scientists have created AI software capable of picking up new knowledge in a far more efficient and sophisticated way.


Classification error in multiclass discrimination from Markov data

Christensen, Sören, Irle, Albrecht, Willert, Lars

arXiv.org Machine Learning

As a model for an on-line classification setting we consider a stochastic process $(X_{-n},Y_{-n})_{n}$, the present time-point being denoted by 0, with observables $\ldots,X_{-n},X_{-n+1},\ldots, X_{-1}, X_0$ from which the pattern $Y_0$ is to be inferred. In this classification setting, a number $l$ of preceding observations may be used in addition to the present observation $X_0$, thus taking into account a possible dependence structure such as occurs, e.g., in the ongoing classification of handwritten characters. We treat the question of how the performance of classifiers improves when such additional information is used. For our analysis, a hidden Markov model is used. Letting $R_l$ denote the minimal risk of misclassification using $l$ preceding observations, we show that the difference $\sup_k |R_l - R_{l+k}|$ decreases exponentially fast as $l$ increases. This suggests that a small $l$ might already lead to a noticeable improvement. To pursue this point we look at the use of past observations in kernel classification rules. Our practical findings in simulated hidden Markov models and in the classification of handwritten characters indicate that using $l=1$, i.e. just the last preceding observation in addition to $X_0$, can lead to a substantial reduction of the risk of misclassification. So, in the presence of stochastic dependencies, we advocate using $X_{-1},X_0$ for finding the pattern $Y_0$ instead of only $X_0$, as one would in the independent situation.
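The $l=1$ effect described above can be illustrated on a toy two-state hidden Markov model. The sketch below is my own illustration, not the paper's construction: the persistence parameter, Gaussian emissions, and the two Bayes rules (one seeing only $X_0$, one also seeing $X_{-1}$) are all assumptions chosen to make the simulation simple, whereas the paper studies kernel classification rules:

```python
import numpy as np

def simulate_hmm(n, p_stay=0.9, sigma=1.5, seed=0):
    """Two-state hidden Markov chain Y_t with Gaussian emissions X_t ~ N(+-1, sigma^2)."""
    rng = np.random.default_rng(seed)
    y = np.empty(n, dtype=int)
    y[0] = rng.integers(2)
    for t in range(1, n):
        y[t] = y[t - 1] if rng.random() < p_stay else 1 - y[t - 1]
    x = rng.normal(2.0 * y - 1.0, sigma)  # mean -1 in state 0, +1 in state 1
    return x, y

def classify(x, p_stay=0.9, sigma=1.5, use_context=True):
    """Bayes rule for Y_t from X_t alone, or from the pair (X_{t-1}, X_t)."""
    llr = 2.0 * x / sigma**2              # log P(x|Y=1) - log P(x|Y=0)
    if not use_context:
        return (llr > 0).astype(int)
    # Filter the previous state from X_{t-1}, then push that posterior through
    # the transition matrix to get a data-dependent prior for Y_t.
    s_prev = 1.0 / (1.0 + np.exp(-llr[:-1]))            # P(Y_{t-1}=1 | X_{t-1})
    prior1 = p_stay * s_prev + (1 - p_stay) * (1 - s_prev)
    l_prior = np.log(prior1 / (1 - prior1))
    yhat = (llr[1:] + l_prior > 0).astype(int)
    # the first time-point has no predecessor; fall back to X_0 alone there
    return np.concatenate([[int(llr[0] > 0)], yhat])

x, y = simulate_hmm(20000)
acc_single = float(np.mean(classify(x, use_context=False) == y))
acc_pair = float(np.mean(classify(x, use_context=True) == y))
```

On a persistent chain like this, the pair-based rule flips the decision exactly when the current observation is ambiguous but the previous one is informative, which is the intuition behind the paper's finding that $l=1$ already yields a substantial risk reduction.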